In this letter, we propose a robust, real-time, INS-centric GNSS-visual-inertial navigation system (IC-GVINS) for wheeled robots, in which the precise INS is fully utilized in both the state estimation and the visual process. To improve system robustness, the INS information is employed throughout the keyframe-based visual process with a strict outlier-culling strategy. GNSS is adopted to perform an accurate and convenient initialization of IC-GVINS, and is further used to achieve absolute positioning in large-scale environments. The IMU, visual, and GNSS measurements are tightly fused within a factor graph optimization framework. Dedicated experiments were conducted to evaluate the robustness and accuracy of IC-GVINS on a wheeled robot. IC-GVINS exhibits superior robustness in various visually degraded scenes with moving objects. Compared with state-of-the-art visual-inertial navigation systems, the proposed method yields improved robustness and accuracy in various environments. We open-source our code together with the dataset on GitHub.
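The INS-aided outlier-culling idea can be illustrated with a small sketch: the INS-propagated pose predicts where a tracked landmark should reproject, and feature matches that deviate too far are discarded before optimization. The following Python sketch is a simplified illustration under assumed pinhole-camera conventions; the helper names (`project`, `cull_outliers`) and the pixel threshold are hypothetical and not part of the IC-GVINS code.

```python
import numpy as np

def project(K, R_cw, t_cw, p_w):
    """Project a world point into the image with a pinhole model.
    R_cw, t_cw: world-to-camera rotation/translation (e.g. predicted by the INS)."""
    p_c = R_cw @ p_w + t_cw
    uv = K @ (p_c / p_c[2])
    return uv[:2]

def cull_outliers(K, R_cw, t_cw, landmarks_w, observations_px, thresh_px=3.0):
    """Keep only feature observations whose reprojection error, evaluated at the
    INS-predicted pose, is below a pixel threshold (hypothetical value)."""
    keep = []
    for i, (p_w, uv_obs) in enumerate(zip(landmarks_w, observations_px)):
        err = np.linalg.norm(project(K, R_cw, t_cw, p_w) - uv_obs)
        if err < thresh_px:
            keep.append(i)
    return keep

if __name__ == "__main__":
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    landmarks = [np.array([0.1, 0.0, 5.0]), np.array([1.0, 0.5, 4.0])]
    obs = [project(K, R, t, landmarks[0]) + 0.5,     # inlier (small noise)
           project(K, R, t, landmarks[1]) + 50.0]    # gross outlier
    print(cull_outliers(K, R, t, landmarks, obs))    # -> [0]
```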
This paper presents a unified mathematical framework for inertial measurement unit (IMU) preintegration in different frames under different motion conditions. The navigation state is precisely discretized into three parts: the local increment, the global state, and the global increment. The global increment can be computed in different frames, such as the local geodetic navigation frame and the Earth-centered Earth-fixed frame. The local increment, referred to as the IMU preintegration, can be computed under different assumptions about the motion of the agent and the grade of the IMU. Therefore, online state estimation for inertial-integrated navigation systems in different environments becomes more accurate and more convenient.
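As a point of reference, the conventional preintegration decomposition (written here in a local navigation frame with constant gravity g) separates the propagation into a global part, which depends on the state at time i and gravity, and a local increment, which can be computed from IMU samples and biases alone. The formulas below follow the standard Forster-style form, not the paper's exact notation; the paper's unified framework generalizes the global part to other frames (e.g., ECEF) and the local part to other motion and IMU-grade assumptions.

```latex
\begin{aligned}
\mathbf{R}_{j} &= \mathbf{R}_{i}\,\Delta\mathbf{R}_{ij}, &
\Delta\mathbf{R}_{ij} &= \textstyle\prod_{k=i}^{j-1}\operatorname{Exp}\!\big((\tilde{\boldsymbol{\omega}}_{k}-\mathbf{b}_{g})\,\delta t\big),\\
\mathbf{v}_{j} &= \mathbf{v}_{i} + \mathbf{g}\,\Delta t_{ij} + \mathbf{R}_{i}\,\Delta\mathbf{v}_{ij}, &
\Delta\mathbf{v}_{ij} &= \textstyle\sum_{k=i}^{j-1}\Delta\mathbf{R}_{ik}\,(\tilde{\mathbf{a}}_{k}-\mathbf{b}_{a})\,\delta t,\\
\mathbf{p}_{j} &= \mathbf{p}_{i} + \mathbf{v}_{i}\,\Delta t_{ij} + \tfrac{1}{2}\mathbf{g}\,\Delta t_{ij}^{2} + \mathbf{R}_{i}\,\Delta\mathbf{p}_{ij}, &
\Delta\mathbf{p}_{ij} &= \textstyle\sum_{k=i}^{j-1}\!\big(\Delta\mathbf{v}_{ik}\,\delta t + \tfrac{1}{2}\Delta\mathbf{R}_{ik}(\tilde{\mathbf{a}}_{k}-\mathbf{b}_{a})\,\delta t^{2}\big).
\end{aligned}
```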
We present Loc-NeRF, a real-time, vision-based robot localization approach that combines Monte Carlo localization and neural radiance fields (NeRF). Our system uses a pre-trained NeRF model as the map of the environment and can localize itself in real time using an RGB camera as the robot's only exteroceptive sensor. While neural radiance fields have seen significant applications for visual rendering in computer vision and graphics, they have found limited use in robotics. Existing NeRF-based localization methods require both a good initial pose guess and significant computation, making them impractical for real-time robotics applications. By using Monte Carlo localization as the workhorse for estimating poses against a NeRF map model, Loc-NeRF is able to perform localization faster than the state of the art and without relying on an initial pose estimate. In addition to testing on synthetic data, we also run our system on real data collected with a Clearpath Jackal UGV and demonstrate, for the first time, the ability to perform real-time global localization with neural radiance fields. We make our code publicly available at https://github.com/mit-spark/loc-nerf.
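The core idea of using a NeRF purely as an observation model inside a particle filter can be sketched in a few lines. In the sketch below, `render_rgb` stands in for a pre-trained NeRF renderer, and the Gaussian photometric likelihood, noise scales, and planar pose parameterization are illustrative assumptions rather than the exact Loc-NeRF pipeline.

```python
import numpy as np

def mcl_step(particles, weights, odom_delta, observed_rgb, render_rgb,
             motion_noise=0.05, sigma=0.1, rng=np.random.default_rng(0)):
    """One Monte Carlo localization step with a NeRF map as the observation model.
    particles: (N, 3) array of toy planar poses [x, y, yaw].
    render_rgb(pose) -> HxWx3 image rendered by a pre-trained NeRF (assumed)."""
    # 1) Predict: apply odometry with additive noise.
    particles = particles + odom_delta + rng.normal(0.0, motion_noise, particles.shape)
    # 2) Update: photometric likelihood between the camera image and a NeRF render.
    for i, pose in enumerate(particles):
        err = np.mean((render_rgb(pose) - observed_rgb) ** 2)
        weights[i] *= np.exp(-err / (2.0 * sigma ** 2))
    weights /= weights.sum()
    # 3) Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

if __name__ == "__main__":
    # Toy run with a dummy renderer standing in for a pre-trained NeRF.
    rng = np.random.default_rng(0)
    particles = rng.uniform(-1, 1, size=(200, 3))
    weights = np.full(200, 1.0 / 200)
    dummy_render = lambda pose: np.full((8, 8, 3), pose[0])   # placeholder map
    target = dummy_render(np.array([0.3, 0.0, 0.0]))
    p, w = mcl_step(particles, weights, np.zeros(3), target, dummy_render)
    print(np.mean(p[:, 0]))   # posterior mass concentrates roughly near x = 0.3
```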
We consider a category-level perception problem, where one is given 2D or 3D sensor data picturing an object of a given category (e.g., a car) and has to reconstruct the 3D pose and shape of the object despite intra-class variability (i.e., different car models have different shapes). We consider an active shape model, where, for the object category, we are given a library of potential CAD models describing objects in that category, and we adopt a standard formulation in which pose and shape are estimated from 2D or 3D keypoints via non-convex optimization. Our first contribution is to develop PACE3D* and PACE2D*, the first certifiably optimal solvers for pose and shape estimation using 3D and 2D keypoints, respectively. Both solvers rely on the design of tight (i.e., exact) semidefinite relaxations. Our second contribution is to develop outlier-robust versions of both solvers, named PACE3D# and PACE2D#. Towards this goal, we propose ROBIN, a general graph-theoretic framework for pruning outliers, which uses compatibility hypergraphs to model the compatibility of measurements. We show that, in category-level perception problems, these hypergraphs can be built from the keypoints (in 2D) or their convex hulls (in 3D), and that many outliers can be pruned via maximum hyperclique computation. The last contribution is an extensive experimental evaluation. Besides providing ablation studies on simulated datasets and on the PASCAL dataset, we combine our solvers with a deep keypoint detector and show that PACE3D# improves over the state of the art in vehicle pose estimation on the ApolloScape dataset, with a runtime that is compatible with practical applications.
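The compatibility-based pruning idea is easiest to see in the simpler pairwise (graph, not hypergraph) setting of rigid registration: two correspondences are compatible if they preserve the inter-point distance, so mutually consistent inliers form a large clique. The sketch below illustrates that simplified case with NetworkX clique enumeration; it is not ROBIN's category-level hypergraph construction, and the compatibility threshold is an arbitrary illustration.

```python
import itertools
import numpy as np
import networkx as nx

def prune_outliers(src, dst, eps=0.05):
    """Pairwise-compatibility outlier pruning (simplified, rigid-registration case):
    correspondences (i, j) are compatible if they preserve inter-point distance.
    ROBIN generalizes this to compatibility hypergraphs for category-level problems."""
    n = len(src)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i, j in itertools.combinations(range(n), 2):
        if abs(np.linalg.norm(src[i] - src[j]) - np.linalg.norm(dst[i] - dst[j])) < eps:
            G.add_edge(i, j)
    # Inliers are mutually compatible, so they form a large clique.
    return max(nx.find_cliques(G), key=len)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    src = rng.normal(size=(8, 3))
    dst = src + np.array([1.0, 0.0, 0.0])   # pure translation -> all inliers
    dst[6] += 3.0                            # corrupt two correspondences
    dst[7] -= 2.0
    print(sorted(prune_outliers(src, dst)))  # typically the uncorrupted indices 0..5
```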
Search and rescue with a team of heterogeneous mobile robots in unknown, large-scale underground environments requires high-precision localization and mapping. This crucial requirement faces many challenges in complex, perceptually degraded subterranean environments, as the onboard perception system must operate in off-nominal conditions (poor visibility due to darkness and dust, rugged and muddy terrain, and the presence of self-similar and ambiguous scenes). In disaster-response scenarios, and in the absence of prior information about the environment, robots must rely on noisy sensor data and perform simultaneous localization and mapping (SLAM) to build a 3D map of the environment and to localize themselves and potential survivors. To that end, this paper reports on the multi-robot SLAM system developed by Team CoSTAR in the context of the DARPA Subterranean Challenge. We extend our previous work, LAMP, by incorporating a single-robot front-end interface adaptable to different odometry sources and lidar configurations, a scalable multi-robot front-end that supports inter-robot and intra-robot loop-closure detection in large-scale environments and multi-robot teams, and a robust back-end equipped with outlier-resilient pose graph optimization based on graduated non-convexity. We provide a detailed ablation study on the multi-robot front-end and back-end, and evaluate the overall system performance on challenging real-world datasets collected across mines, a power plant, and a cave in the United States. We also release our multi-robot back-end datasets (together with the corresponding ground truth), which can serve as challenging benchmarks for large-scale underground SLAM.
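The outlier-resilient back-end idea (graduated non-convexity over a truncated-least-squares cost) can be illustrated independently of any SLAM library: residuals are re-weighted between 0 and 1 while a control parameter mu is annealed, so gross outliers are gradually switched off. The sketch below is a generic GNC-TLS weight-update loop on a toy scalar-estimation problem, where a weighted mean plays the role of the inner solver; it is not the LAMP back-end itself, and the threshold and annealing factor are illustrative.

```python
import numpy as np

def gnc_tls_estimate(z, c=0.1, mu_update=1.4, iters=20):
    """Estimate a scalar from measurements z via graduated non-convexity (GNC)
    over a truncated-least-squares cost. In a SLAM back-end, the weighted-mean
    step would be replaced by a weighted pose graph optimization."""
    x = np.mean(z)                                   # non-robust initial guess
    r2 = (z - x) ** 2
    mu = c ** 2 / max(2.0 * r2.max() - c ** 2, 1e-12)  # start near a convex surrogate
    w = np.ones_like(z)
    for _ in range(iters):
        r2 = (z - x) ** 2
        lo, hi = (mu / (mu + 1.0)) * c ** 2, ((mu + 1.0) / mu) * c ** 2
        # Closed-form GNC-TLS weight update.
        w = np.clip(np.sqrt(c ** 2 * mu * (mu + 1.0) / np.maximum(r2, 1e-12)) - mu, 0.0, 1.0)
        w[r2 <= lo] = 1.0
        w[r2 >= hi] = 0.0
        x = np.sum(w * z) / max(np.sum(w), 1e-12)    # weighted "solve"
        mu *= mu_update                              # anneal toward the TLS cost
    return x, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = np.concatenate([1.0 + 0.01 * rng.normal(size=50), [5.0, -4.0, 9.0]])
    x, w = gnc_tls_estimate(z)
    print(round(float(x), 3))   # close to 1.0; the three outliers get weight ~0
```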
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even when the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
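The implicit alignment idea, tagging both image tokens and point-cloud tokens with encodings derived from 3D coordinates so that a plain transformer decoder can attend across modalities, can be sketched as follows. This is a schematic PyTorch illustration with made-up shapes and a generic MLP coordinate encoder; it is not the CMT architecture.

```python
import torch
import torch.nn as nn

class CoordEncoder(nn.Module):
    """Map the 3D coordinates associated with each token into the embedding space."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, xyz):          # (B, N, 3) -> (B, N, dim)
        return self.mlp(xyz)

class CrossModalDecoder(nn.Module):
    """Object queries attend over concatenated image + LiDAR tokens that carry
    3D-coordinate position encodings (schematic sketch, not the actual CMT)."""
    def __init__(self, dim=256, num_queries=100, num_layers=3):
        super().__init__()
        self.pe = CoordEncoder(dim)
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.box_head = nn.Linear(dim, 10)   # e.g. center, size, yaw, velocity

    def forward(self, img_tok, img_xyz, pts_tok, pts_xyz):
        tokens = torch.cat([img_tok + self.pe(img_xyz),
                            pts_tok + self.pe(pts_xyz)], dim=1)
        q = self.queries.weight.unsqueeze(0).expand(tokens.size(0), -1, -1)
        return self.box_head(self.decoder(q, tokens))

if __name__ == "__main__":
    B, Ni, Np, D = 2, 64, 128, 256
    model = CrossModalDecoder(dim=D)
    out = model(torch.randn(B, Ni, D), torch.randn(B, Ni, 3),
                torch.randn(B, Np, D), torch.randn(B, Np, 3))
    print(out.shape)   # torch.Size([2, 100, 10])
```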
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
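The difference between the two attacks is essentially where and how often the trigger is written into the data that the distillation procedure sees. A minimal sketch of the NAIVEATTACK-style step, stamping a fixed patch on a fraction of raw images and flipping their labels to the target class before distillation begins, is shown below; the patch size, poison rate, and target label are illustrative assumptions. DOORPING would instead keep re-optimizing the trigger against the running distillation objective rather than fixing it up front.

```python
import numpy as np

def add_trigger(images, labels, target_label=0, poison_rate=0.05, patch=3,
                rng=np.random.default_rng(0)):
    """Stamp a white square trigger in the bottom-right corner of a random subset
    of images and relabel them to the target class (NAIVEATTACK-style poisoning
    of the raw data before distillation; all values here are illustrative)."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:, :] = 1.0     # fixed trigger pattern
    labels[idx] = target_label                 # flip labels to the attack target
    return images, labels, idx

if __name__ == "__main__":
    x = np.zeros((100, 32, 32, 3), dtype=np.float32)   # toy CIFAR-like batch
    y = np.ones(100, dtype=np.int64)
    xp, yp, idx = add_trigger(x, y)
    print(len(idx), xp[idx[0], -1, -1])   # 5 poisoned samples, corner pixels set to 1.0
```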
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and models will be available.
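The first ingredient, using support masks to build dynamic class centers and re-weighting query features against them, reduces to masked average pooling followed by a similarity-based re-weighting. The sketch below is a toy PyTorch rendering of that idea under assumed shapes and a made-up temperature; it is not the actual RefT module.

```python
import torch
import torch.nn.functional as F

def reweight_query_features(support_feat, support_mask, query_feat, tau=0.1):
    """support_feat: (C, H, W) features of a support image
    support_mask:   (H, W) binary mask of the support object
    query_feat:     (C, Hq, Wq) features of the query image
    Returns query features re-weighted by similarity to the masked class center."""
    # 1) Masked average pooling -> a dynamic class center of shape (C,)
    m = support_mask.float().unsqueeze(0)                          # (1, H, W)
    center = (support_feat * m).sum(dim=(1, 2)) / m.sum().clamp(min=1.0)
    # 2) Cosine similarity between every query location and the class center
    sim = F.cosine_similarity(query_feat, center[:, None, None], dim=0)  # (Hq, Wq)
    # 3) Soft re-weighting of the query feature map
    return query_feat * torch.sigmoid(sim / tau).unsqueeze(0)

if __name__ == "__main__":
    sf = torch.randn(64, 32, 32)
    sm = torch.zeros(32, 32)
    sm[8:24, 8:24] = 1
    qf = torch.randn(64, 32, 32)
    print(reweight_query_features(sf, sm, qf).shape)   # torch.Size([64, 32, 32])
```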
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity between the efficient Inverted Residual Block in MobileNetV2 and the effective Transformer in ViT, inductively abstracting a general Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like, 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing state-of-the-art CNN- and Transformer-based models while trading off model accuracy and efficiency well.
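A block that absorbs CNN-like efficiency for short-distance dependency and Transformer-like dynamic modeling for long-distance interaction can be approximated by an inverted residual structure whose expanded features are mixed by both a depthwise convolution and lightweight self-attention. The PyTorch sketch below is one plausible reading of such a hybrid block, with simplified shapes and no claim to match the official iRMB/EMO implementation.

```python
import torch
import torch.nn as nn

class HybridInvertedBlock(nn.Module):
    """Inverted residual block whose expanded features go through both a depthwise
    conv (local mixing) and multi-head self-attention (global mixing).
    A simplified stand-in for an iRMB-style block, not the official EMO code."""
    def __init__(self, dim=64, expand=4, heads=4):
        super().__init__()
        hid = dim * expand
        self.expand = nn.Sequential(nn.Conv2d(dim, hid, 1), nn.GELU())
        self.dw = nn.Conv2d(hid, hid, 3, padding=1, groups=hid)            # local, cheap
        self.attn = nn.MultiheadAttention(hid, heads, batch_first=True)    # global
        self.project = nn.Conv2d(hid, dim, 1)
        self.norm = nn.LayerNorm(hid)

    def forward(self, x):                       # x: (B, dim, H, W)
        b, _, h, w = x.shape
        y = self.expand(x)
        y = self.dw(y)
        t = self.norm(y.flatten(2).transpose(1, 2))     # (B, H*W, hid)
        t, _ = self.attn(t, t, t)
        y = y + t.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.project(y)              # residual connection

if __name__ == "__main__":
    blk = HybridInvertedBlock(dim=64)
    print(blk(torch.randn(2, 64, 14, 14)).shape)   # torch.Size([2, 64, 14, 14])
```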
Benefiting from its ability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms. 1) The quality of positive samples heavily depends on carefully designed data augmentations, and inappropriate augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of performing complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples, respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
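The objective described above, pulling the two views of same-cluster samples together while pushing each sample away from the centers of the other high-confidence clusters via cross-view cosine similarity, can be written compactly. The loss below is a simplified paraphrase of that description under assumed inputs, not the exact CCGC objective.

```python
import torch
import torch.nn.functional as F

def cluster_guided_contrastive_loss(z1, z2, labels, centers2):
    """z1, z2:   (N, D) embeddings of the same nodes from the two Siamese views.
    labels:   (N,) high-confidence cluster assignments of the nodes.
    centers2: (K, D) cluster centers computed from the second view.
    Pull cross-view positives together and push each node away from the other
    clusters' centers (simplified, CCGC-style sketch)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    centers2 = F.normalize(centers2, dim=1)
    pos = (z1 * z2).sum(dim=1)                       # cross-view cosine, positives
    neg = z1 @ centers2.t()                          # similarity to all cluster centers
    neg_mask = torch.ones_like(neg).scatter_(1, labels.view(-1, 1), 0.0)
    # Maximize positive similarity, minimize similarity to other clusters' centers.
    return (1.0 - pos).mean() + ((neg * neg_mask).sum(1) / neg_mask.sum(1)).mean()

if __name__ == "__main__":
    N, D, K = 16, 32, 4
    z1, z2 = torch.randn(N, D), torch.randn(N, D)
    labels = torch.randint(0, K, (N,))
    centers2 = torch.randn(K, D)
    print(cluster_guided_contrastive_loss(z1, z2, labels, centers2).item())
```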